With the rapid development of the Internet of Vehicles (IoV), smart connected vehicles generate large numbers of latency-sensitive, computation-intensive tasks, and neither the limited on-board computing resources nor the traditional cloud service model can meet the needs of in-vehicle users. Mobile Edge Computing (MEC) provides an effective paradigm for offloading such massive task data. However, in multi-task, multi-user scenarios, real-time dynamic changes in vehicle location, task type and vehicle density make task offloading in IoV highly complex, and the offloading process is prone to problems such as unbalanced edge resource allocation, excessive communication overhead and slow algorithm convergence. To address these problems, the cooperative task offloading strategy of multiple edge servers in multi-task, multi-user mobile IoV scenarios was studied. First, a three-layer heterogeneous network model for multi-edge collaborative processing was proposed, and dynamic collaborative clusters were introduced to cope with the changing IoV environment, transforming the offloading problem into a joint optimization of delay and energy consumption. The problem was then divided into two subproblems, offloading decision and resource allocation; the resource allocation subproblem was further split into edge-server resource allocation and transmission-bandwidth allocation, and both subproblems were solved using convex optimization theory. To find the optimal set of offloading decisions, a Multi-edge Collaborative Deep Deterministic Policy Gradient (MC-DDPG) algorithm capable of handling continuous action spaces within collaborative clusters was proposed, and on this basis an Asynchronous MC-DDPG (AMC-DDPG) algorithm was designed.
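The joint delay-energy objective mentioned above can be illustrated with a minimal cost-model sketch. This is an assumption-laden toy version, not the paper's actual formulation: the function names, the weighted-sum form, and the standard MEC cost terms (CPU-cycle execution delay, dynamic CPU energy `kappa * f^2 * cycles`, uplink transmission delay and energy) are all illustrative choices.

```python
def local_cost(cycles, f_local, kappa, w_t=0.5, w_e=0.5):
    """Weighted delay-energy cost of executing a task on the vehicle itself.

    cycles: CPU cycles the task requires; f_local: vehicle CPU frequency (Hz);
    kappa: effective switched-capacitance coefficient of the CPU.
    """
    delay = cycles / f_local                  # local execution delay (s)
    energy = kappa * (f_local ** 2) * cycles  # dynamic CPU energy (J)
    return w_t * delay + w_e * energy


def offload_cost(data_bits, cycles, rate_bps, p_tx, f_edge, w_t=0.5, w_e=0.5):
    """Weighted cost of offloading the task to an edge server.

    data_bits: task input size; rate_bps: uplink rate; p_tx: transmit power (W);
    f_edge: CPU frequency allocated to this task at the edge (Hz).
    """
    t_tx = data_bits / rate_bps   # uplink transmission delay (s)
    t_exec = cycles / f_edge      # edge execution delay (s)
    energy = p_tx * t_tx          # vehicle-side transmission energy (J)
    return w_t * (t_tx + t_exec) + w_e * energy


# An offloading decision would then pick the cheaper option per task:
c_local = local_cost(1e9, 1e9, 1e-27)                 # 1 Gcycle task on a 1 GHz CPU
c_edge = offload_cost(1e6, 1e9, 1e7, 0.5, 1e10)       # 1 Mbit upload, 10 GHz edge slice
```

Under these (made-up) parameters the edge option is cheaper, which is the intuition behind offloading computation-intensive tasks.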
The training parameters of each collaborative cluster were asynchronously uploaded to the cloud for a global update, and the updated results were then returned to the clusters to accelerate convergence. Simulation results show that the AMC-DDPG algorithm improves convergence speed by at least 30% over the DDPG algorithm and achieves better results in terms of reward and total cost.
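The asynchronous push-to-cloud / pull-back exchange described above can be sketched as a simple parameter aggregator. All names and the mixing rule are assumptions for illustration (a convex blend of global and incoming cluster parameters); the paper's actual update rule is not specified in this abstract.

```python
class CloudAggregator:
    """Toy cloud-side global parameter store for asynchronous cluster updates.

    Each collaborative cluster calls push_and_pull() on its own schedule:
    the cloud folds the cluster's parameters into the global copy and
    immediately returns the refreshed global parameters to that cluster.
    """

    def __init__(self, init_params, mix=0.5):
        self.global_params = list(init_params)
        self.mix = mix  # weight given to an incoming cluster update

    def push_and_pull(self, cluster_params):
        # Blend the incoming parameters into the global set (illustrative rule).
        self.global_params = [
            (1 - self.mix) * g + self.mix * c
            for g, c in zip(self.global_params, cluster_params)
        ]
        return list(self.global_params)


cloud = CloudAggregator([0.0, 0.0])
after_first = cloud.push_and_pull([1.0, 2.0])   # cluster A reports in
after_second = cloud.push_and_pull([3.0, 4.0])  # cluster B reports in later
```

Because each cluster exchanges parameters independently rather than waiting for a synchronized round, no cluster blocks on the slowest one, which is the source of the convergence speedup claimed for AMC-DDPG.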